Light-Field Imaging 101

A comprehensive introduction to unfocused light-field imaging (plenoptic 1.0)

Author

David Nguyen

Modified

May 8, 2025

1 Intended Audience

This guide aims to expand upon existing unfocused light-field (plenoptic 1.0) imaging learning material by providing detailed Python implementations of light-field image processing along with examples. The light-field image processing workflow closely follows that of the MATLAB-based Light-Field Imaging Toolkit [1], but leverages Python’s open-source ecosystem and Quarto’s enhanced capabilities for visualization and LaTeX formula integration.

By focusing specifically on the image processing aspects, this guide helps readers better understand the technical limitations and practical considerations of light-field imaging techniques. This guide assumes the reader is familiar with unfocused light-field imaging. Such an understanding can be achieved by browsing plenoptic.info or reading the unfocused light-field part of the Light-Field Camera Working Principles chapter (Pages 11-25) of the book Development and Application of Light-Field Cameras in Fluid Measurements [2].

2 Preprocessing

2.1 Raw Images

To illustrate light-field image processing concepts, a synthetic unfocused light-field image was generated and is shown in Figure 1. The image describes the raw pixel intensities as recorded by an unfocused light-field camera. Every lenslet of the camera projects the incident radiance into a defocused intensity distribution pattern. This manifests as a spatially constrained blur where the light energy from each microlens is distributed across multiple sensor pixels rather than being concentrated at the expected conjugate position. The image was generated using commercial ray-tracing software (OpticStudio, ANSYS).

Figure 1: Synthetic unfocused light-field image. The hexagonal patterning in the image is characteristic of the projection process by the microlens array.

2.2 Microlens Array Structure

The unfocused light-field camera model implemented in this investigation utilizes a hexagonal microlens array configuration for two primary reasons. First, hexagonal microlens arrays provide optimal spatial packing efficiency, thereby maximizing the effective sensor area utilization. Second, the hexagonal pattern introduces additional computational complexity in light-field image processing algorithms—specifically in sampling pattern interpolation—that merits thorough examination within this methodological framework.
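The packing-efficiency argument can be made quantitative: circular lenslet apertures on a hexagonal lattice cover π/(2√3) ≈ 90.7 % of the sensor area, versus π/4 ≈ 78.5 % on a square lattice. A quick check:

```python
import math

# Area fraction covered by equal circles (the lenslet apertures) in each lattice:
# hexagonal packing density = pi / (2 * sqrt(3)), square packing density = pi / 4
hex_density = math.pi / (2 * math.sqrt(3))
square_density = math.pi / 4

print(f"hexagonal: {hex_density:.4f}, square: {square_density:.4f}")
```

The roughly 12-percentage-point gain is what makes hexagonal arrays the common choice despite the irregular sampling pattern they impose downstream.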

The hexagonal microlens array has 51 × 51 complete lenslets. This core array is supplemented with partial lenslets along the periphery to achieve an overall rectangular shape. The total number of elements in the array is 2 729, which corresponds to the expected number of spots in the calibration image.
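One way the total of 2 729 can arise from the 51 × 51 core plus peripheral partial lenslets is a rectangle filled with 53 alternating rows of 51 and 52 lenslets; this particular row layout is an assumption for illustration, but it reproduces the stated count:

```python
# Hypothetical row layout (an assumption, not taken from the document):
# 53 rows alternating between 51 and 52 lenslets, as in a hexagonal lattice
# whose offset rows gain one partial lenslet at the edge.
n_rows = 53
row_counts = [51 if r % 2 == 0 else 52 for r in range(n_rows)]

total_lenslets = sum(row_counts)
print(total_lenslets)  # 2729
```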

2.3 Calibration

The calibration process seeks to pinpoint the centroid of each lenslet’s corresponding sensor region. A calibration image is acquired with the main lens’s aperture reduced to a minimum. The resulting calibration image consists of an array of small bright spots as shown in Figure 2.

Figure 2: Synthetic light-field calibration image. Every bright spot is indicative of the corresponding lenslet centroid on the sensor.

The centroid identification process based on intensity peaks in an image is a standard image processing technique widely documented in the literature, not exclusive to light-field imaging. Since the synthetic calibration image contains no sensor noise, the calibration procedure has been intentionally simplified to emphasize conceptual clarity and streamline the explanation.

The spot centroid detection procedure consists of three main steps. First, a manual threshold is applied to the calibration image to identify potential spot locations. Second, a binary dilation operation using a disk-shaped structuring element is performed to expand these areas, ensuring that the subsequent weighted centroid calculation encompasses the full spot and its immediate surroundings. Finally, the scikit-image regionprops function is used to calculate the intensity-weighted centroid of each spot.

Code
import numpy as np
import tifffile

from skimage import morphology
from skimage.morphology import disk
from skimage.measure import label, regionprops

# Load calibration image
calibration_image = tifffile.imread('calibration.tif')

# Apply hard-coded threshold (chosen to match the expected number of centroids)
binary_calibration = calibration_image > 16

# Perform a binary dilation to ensure coverage of the spots for the computation
# of the weighted centroid
dilated_calibration = morphology.binary_dilation(binary_calibration, disk(4))

# Label connected components in the dilated binary mask
labeled_calibration = label(dilated_calibration)

# Measure region properties, using the original image for intensity weighting
regions_calibration = regionprops(labeled_calibration, intensity_image=calibration_image)

# Retrieve weighted centroids location and convert list to numpy array
centroids = np.array([region.centroid_weighted for region in regions_calibration])

print(f'Number of centroids detected: {len(centroids):,}'.replace(',', '\u2009'))
Number of centroids detected: 2 729

The threshold value was manually selected to ensure the number of detected centroids matches the total lenslet count (2 729) as described in Section 2.2.
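The manual selection can also be framed as a simple sweep: increase the candidate threshold until the connected-component count matches the expected lenslet total. The sketch below demonstrates the idea on a small synthetic spot image (the 3 × 3 spot grid and the use of scipy.ndimage.label are illustrative assumptions, not the document's data):

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for the calibration image: a 3 x 3 grid of Gaussian-like
# spots (purely illustrative; the real image contains 2 729 spots).
image = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
for cy in (10, 32, 54):
    for cx in (10, 32, 54):
        image += 100 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 4)

expected_spots = 9

# Sweep candidate thresholds and stop at the first one whose
# connected-component count matches the expected number of lenslets
for threshold in range(1, 100):
    _, n_spots = ndimage.label(image > threshold)
    if n_spots == expected_spots:
        break

print(threshold, n_spots)
```

On real, noisy data the sweep would typically also reject thresholds whose regions are implausibly small or large, but the matching-count criterion mirrors the manual procedure described above.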

The centroid localization procedure is demonstrated in Figure 3 using a representative calibration spot. The figure displays three frames: the first showing a magnified view of an individual calibration spot, the second showing the binary mask generated through thresholding, and the third showing the binary mask after a morphological dilation operation. The calculated weighted centroid position, determined by applying the dilated mask to the original intensity distribution of the spot, is indicated by a pink cross marker.

Figure 3: Close-up of a calibration spot showing the centroid identification process. Toggle view displays: ‘Calibration ROI’ (original spot from calibration image), ‘Thresholded ROI’ (binary mask after thresholding), or ‘Dilated ROI’ (binary mask after morphological dilation). Pink cross indicates the weighted centroid location calculated from the original intensity distribution using the dilated mask.

2.4 Reshaping

The next step is to reshape the 2D light-field image into a 4D light-field array as a function of S and T (the spatial coordinates) and U and V (the angular coordinates). Each centroid location from Section 2.3 corresponds to a position in (S, T) space. At each such position, a circular (U, V) map is extracted from the light-field image. An interpolation step aligns each centroid with the pixel grid, and a masking operation removes information from neighboring lenslets.

Code
import numpy as np
import tifffile
from scipy.interpolate import interpn

def get_roi(image, center, half_size):
    """
    Extract a square Region of Interest (ROI) from an image.
    
    Parameters:
    -----------
    image : ndarray
        The input image from which to extract the ROI.
    center : tuple of int
        The (row, column) coordinates of the center point of the ROI as integer indices.
    half_size : int
        Half the size of the ROI. The total width/height will be (2*half_size+1).
        
    Returns:
    --------
    ndarray
        A square sub-image centered at the specified coordinates with 
        dimensions (2*half_size+1) × (2*half_size+1).
    """
    row, col = center
    return image[row-half_size:row+half_size+1, col-half_size:col+half_size+1]

# Load raw light-field image
raw_light_field_image = tifffile.imread('raw-light-field.tif')

# Hard-coded UV map radius
uv_radius = 17

# Hard-coded margin (for interpolation)
margin = 3

# Initialize integer grid
integer_grid = np.arange(-(uv_radius+margin), uv_radius+margin+1)
u_int_grid, v_int_grid = np.meshgrid(integer_grid, integer_grid)

# Create circular mask
circular_mask = u_int_grid**2 + v_int_grid**2 <= uv_radius**2

# Initialize light-field array
uv_diameter = 2*uv_radius+1
light_field_array = np.zeros((len(centroids), uv_diameter, uv_diameter))

# Loop over the number of detected centroids
for ii, centroid in enumerate(centroids):
    # Integer centroid location: astype(int) truncates, which acts as a floor
    # for the non-negative centroid coordinates
    rounded_centroid = centroid.astype(int)

    # Fractional offset of the true centroid from the integer pixel grid
    offset = centroid - rounded_centroid

    # Create shifted grids for interpolation; offset[0] is the row (v) shift and
    # offset[1] the column (u) shift, matching regionprops' (row, col) ordering
    u_grid, v_grid = np.meshgrid(integer_grid + offset[1], integer_grid + offset[0])

    # Extract ROI around the centroid
    uv_roi = get_roi(raw_light_field_image, rounded_centroid, uv_radius+margin)

    # Interpolate the ROI to the offset grid
    interpolated_roi = interpn(
        (integer_grid, integer_grid),
        uv_roi,
        (v_grid, u_grid),
        bounds_error=False,
        fill_value=None
        )

    # Populate light-field array and perform masking operation (removing the margin)
    light_field_array[ii, :, :] = (interpolated_roi*circular_mask)[margin:-margin, margin:-margin]

The reshaping procedure is demonstrated in Figure 4 using a representative angular (U and V) space in the light-field image. The figure displays three frames: the first showing an angular space from the light-field image at the location of a rounded calibration spot, the second showing the same angular space interpolated at the exact location of the centroid (aligned with the pixel grid), and the third showing the circular masking to reject angular information from neighboring lenslets. The pink cross marker indicates the weighted centroid position. Yellow axes indicate the center of the ROI.

Figure 4: Close-up of a light-field angular (U, V) map showing the reshaping process. Toggle view displays: ‘UV Map From ROI’ (original angular space from light-field image at a specific lenslet location), ‘Interpolated UV Map’ (angular space with centroid aligned to pixel grid at the center of the close-up), or ‘Masked UV Map’ (circular mask applied to retain only information from the corresponding lenslet). Pink cross indicates the weighted centroid location. Yellow axes indicate the center of the ROI.
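Once the light field is reshaped, each entry along the first axis of the array is one lenslet's (U, V) map, so fixing a single (U, V) index yields one intensity sample per (S, T) position: the raw material for a sub-aperture (perspective) view. The sketch below illustrates this indexing on a small synthetic array (the sizes and random data are placeholders, not the document's data):

```python
import numpy as np

# Illustrative stand-in for the reshaped array produced by the loop above:
# one (U, V) map per lenslet, stacked along the first axis
n_lenslets, uv_radius = 25, 17
uv_diameter = 2 * uv_radius + 1
light_field_array = np.random.default_rng(1).random(
    (n_lenslets, uv_diameter, uv_diameter)
)

# The central angular sample of every lenslet: in (U, V, S, T) terms, one
# intensity value per (S, T) position, i.e. a central perspective view once
# the samples are placed back on the hexagonal (S, T) lattice
central_view_samples = light_field_array[:, uv_radius, uv_radius]

print(central_view_samples.shape)  # (25,)
```

Placing these samples back onto the hexagonal (S, T) lattice (and interpolating onto a rectangular grid) is what subsequent rendering steps build on.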

References

[1]
J. Bolan, E. Hall, C. Clifford, and B. Thurow, “Light-field imaging toolkit,” SoftwareX, vol. 5, pp. 101–106, 2016.
[2]
S. Shi and T. New, Development and application of light-field cameras in fluid measurements. Springer, 2023.